Combining Classifiers with Informational Confidence

Authors

  • Stefan Jaeger
  • Huanfeng Ma
  • David S. Doermann
Abstract

We propose a new statistical method for learning normalized confidence values in multiple classifier systems. Our main idea is to adjust confidence values so that their nominal values equal the information they actually convey. To do so, we assume that this information depends on the actual performance of each confidence value on an evaluation set. As our information measure, we use Shannon's well-known logarithmic notion of information. With the confidence values matching their informational content, the classifier combination scheme reduces to the simple sum-rule, theoretically justifying this elementary combination scheme. In experimental evaluations for script identification and both handwritten and printed character recognition, we achieve a consistent improvement over the best single recognition rate. We hope that our information-theoretical framework helps fill the theoretical gap that remains in classifier combination, putting the excellent practical performance of multiple classifier systems on a more solid basis.
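One plausible reading of this idea can be sketched in a few lines: a confidence that is correct with empirical probability p on an evaluation set conveys -log2(1 - p) bits about the event "this decision is wrong", and combination then reduces to the sum-rule. The exact mapping and the toy accuracy numbers below are illustrative assumptions, not the paper's formulas.

```python
import math

def informational_confidence(p_correct, eps=1e-6):
    # Shannon's logarithmic information: a confidence value that is
    # correct with empirical probability p conveys -log2(1 - p) bits.
    # Rarely-wrong confidence values therefore carry more information.
    p = min(p_correct, 1.0 - eps)  # guard against log(0) when p == 1
    return -math.log2(1.0 - p)

def combine_sum_rule(per_classifier_infos):
    # With confidences expressed in bits, combination reduces to the
    # simple sum-rule: pick the class with the largest summed information.
    classes = per_classifier_infos[0].keys()
    return max(classes, key=lambda c: sum(d[c] for d in per_classifier_infos))

# Toy example: two classifiers vote on classes "a" and "b". Each dict maps
# class -> empirical accuracy of that classifier's confidence for the class
# on an evaluation set (hypothetical numbers).
clf1 = {"a": 0.90, "b": 0.50}
clf2 = {"a": 0.60, "b": 0.80}
infos = [{c: informational_confidence(p) for c, p in d.items()}
         for d in (clf1, clf2)]
print(combine_sum_rule(infos))  # prints "a"
```

Class "a" wins here because clf1 is almost never wrong about it, and -log2(1 - p) rewards that reliability super-linearly.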


Similar articles

Confidence Evaluation for Combining Diverse Classifiers

For combining classifiers at the measurement level, the diverse outputs of classifiers should be transformed into uniform measures that represent the confidence of a decision, ideally the class probability or likelihood. This paper presents our experimental results of classifier combination using confidence evaluation. We test three types of confidence transforms: log-likelihood, exponential, and sigmoid. For ...
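As a rough illustration of two of the transforms named above (the paper's exact parameterizations may differ; `alpha` and `beta` here are hypothetical scaling parameters that would be fit on an evaluation set):

```python
import math

def sigmoid_conf(score, alpha=1.0, beta=0.0):
    # Sigmoid transform: squashes a raw classifier score into (0, 1).
    return 1.0 / (1.0 + math.exp(-(alpha * score + beta)))

def exponential_conf(scores):
    # Exponential (softmax-style) transform: turns a vector of raw class
    # scores into nonnegative values summing to 1, like class probabilities.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

print(sigmoid_conf(0.0))                 # 0.5 at the decision boundary
print(exponential_conf([2.0, 1.0, 0.0]))
```

Once every member classifier emits confidences on such a common (0, 1) scale, their outputs become directly comparable and can be fused by simple rules.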


Using Negative Correlation Learning to Improve the Performance of Neural Network Ensembles

This paper investigates the effect of diversity caused by Negative Correlation Learning (NCL) in the combination of neural classifiers and presents an efficient way to improve combining performance. Decision Templates and Averaging, as two non-trainable combining methods, and Stacked Generalization, as a trainable combiner, are investigated in our experiments. Utilizing NCL for diversifying the ba...


Combination of multiple classifiers with measurement values

This paper introduces an approach for the combination of classifiers, in a context where each classifier can offer not only class labels but also the corresponding measurement values. This approach is called the Linear Confidence Accumulation method (LCA). The three steps that LCA consists of are: first, measurement values are transformed into confidence values; second, a confidence aggregati...


Automated pronunciation scoring using confidence scoring and landmark-based SVM

In this study, we present a pronunciation scoring method for second language learners of English (hereafter, L2 learners). This study presents a method using both confidence scoring and classifiers. Classifiers have an advantage over confidence scoring for specialization in the specific phonemes where L2 learners make frequent errors. Classifiers (Landmark-based Support Vector Machines) were tr...


On Adaptive Confidences for Critic-Driven Classifier Combining

When combining classifiers in order to improve the classification accuracy, precise estimation of the reliability of each member classifier can be very beneficial. One approach for estimating how confident we can be in the member classifiers’ results being correct is to use specialized critics to evaluate the classifiers’ performances. We introduce an adaptive, critic-based confidence evaluatio...




Journal:

Volume   Issue

Pages  -

Publication date: 2008